22 research outputs found

    Experimental Prediction of the Fracture of 6XXX Aluminum Alloys

    Full text link
    The fracture behavior of Al-6DR1 sheets during the stamping process is of specific importance in the automotive industry. Efforts were made to reduce the costs associated with fracture prediction by using numerical simulations instead of experimental testing. The motivation for developing the fracture surface is to improve the prediction of fractures in simulation, which can then be used to guide the stamping/forming tool design process. The theoretical framework employed in this thesis is based on two fracture models to predict the material behavior: the Modified Mohr-Coulomb (MMC) and the Hosford-Coulomb (HC). Furthermore, a hybrid and a direct calibration method are used to obtain the models' parameters. The hybrid method is based on a numerical-experimental approach to obtain the variation of triaxiality and Lode angle during deformation, and it is also coupled with a damage accumulation rule. The direct calibration method, in contrast, is based on a purely experimental approach, where the triaxiality and Lode angle were assumed to be constant for the suggested experiments all the way to the fracture initiation stage. The specific tests used in the direct calibration are hemispherical punch stretching tests to induce equi-biaxial strain, pure shear tests, 3-point bend tests and Marciniak tests to induce plane strain, and hole-expansion tests to induce fracture under uniaxial tension strain. To capture the effects of stress triaxiality and Lode angle experienced in the material fabrication process, a range of stress states was needed, including pure shear, uniaxial tension, plane strain tension and equi-biaxial tension. The generated fracture surface is then validated and incorporated in numerical models that simulate the deformation process and allow for prediction of critical part locations that are likely to fracture during forming.
Such predictive capabilities are important in the tool design stage.
Master of Science in Engineering, Industrial and Systems Engineering, College of Engineering & Computer Science, University of Michigan-Dearborn
https://deepblue.lib.umich.edu/bitstream/2027.42/143523/1/Thesis-update4-26-2018(Final).pdf
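The stress-state quantities this thesis calibrates against, stress triaxiality and the Lode angle parameter, have standard invariant-based definitions. As an illustrative sketch (assuming NumPy; the stress values in the test are made up), they can be computed from a 3x3 Cauchy stress tensor like so:

```python
import numpy as np

def stress_state(sigma):
    """Return (triaxiality, Lode angle parameter) for a 3x3 Cauchy stress tensor."""
    mean = np.trace(sigma) / 3.0                       # hydrostatic (mean) stress
    dev = sigma - mean * np.eye(3)                     # deviatoric part
    j2 = 0.5 * np.sum(dev * dev)                       # second deviatoric invariant
    j3 = np.linalg.det(dev)                            # third deviatoric invariant
    vm = np.sqrt(3.0 * j2)                             # von Mises equivalent stress
    eta = mean / vm                                    # stress triaxiality
    xi = np.clip(13.5 * j3 / vm**3, -1.0, 1.0)         # normalized third invariant
    theta_bar = 1.0 - (2.0 / np.pi) * np.arccos(xi)    # Lode angle parameter in [-1, 1]
    return eta, theta_bar
```

With these definitions, uniaxial tension gives a triaxiality of 1/3 and a Lode angle parameter of 1, while pure shear gives 0 for both, matching the stress states named in the abstract.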

    Investigating Explanations in Conditional and Highly Automated Driving: The Effects of Situation Awareness and Modality

    Full text link
    As the level of automation increases in vehicles, such as conditional and highly automated vehicles (AVs), drivers are becoming increasingly out of the control loop, especially in unexpected driving scenarios. Although it might not be necessary to require drivers to intervene on most occasions, it is still important to improve drivers' situation awareness (SA) in unexpected driving scenarios to improve their trust in and acceptance of AVs. In this study, we conceptualized SA at the levels of perception (SA L1), comprehension (SA L2), and projection (SA L3), and proposed an SA level-based explanation framework based on explainable AI. Then, we examined the effects of these explanations and their modalities on drivers' situational trust, cognitive workload, and explanation satisfaction. A three (SA levels: SA L1, SA L2 and SA L3) by two (explanation modalities: visual, visual + audio) between-subjects experiment was conducted with 340 participants recruited from Amazon Mechanical Turk. The results indicated that by designing the explanations using the proposed SA-based framework, participants could redirect their attention to the important objects in the traffic and understand their meaning for the AV system. This improved their SA and helped them understand why the AV behaved as it did in particular situations, which also increased their situational trust in the AV. The results showed that participants reported the highest trust with SA L2 explanations, although mental workload was rated higher at this level. The results also provided insights into the relationship between the amount of information in explanations and modalities, showing that participants were more satisfied with visual-only explanations in the SA L1 and SA L2 conditions and were more satisfied with visual and auditory explanations in the SA L3 condition.

    Modeling Dispositional and Initial learned Trust in Automated Vehicles with Predictability and Explainability

    Get PDF
    Technological advances in the automotive industry are bringing automated driving closer to road use. However, one of the most important factors affecting public acceptance of automated vehicles (AVs) is the public's trust in AVs. Many factors can influence people's trust, including perception of risks and benefits, feelings, and knowledge of AVs. This study aims to use these factors to predict people's dispositional and initial learned trust in AVs using a survey study conducted with 1175 participants. For each participant, 23 features were extracted from the survey questions to capture his or her knowledge, perception, experience, behavioral assessment, and feelings about AVs. These features were then used as input to train an eXtreme Gradient Boosting (XGBoost) model to predict trust in AVs. With the help of SHapley Additive exPlanations (SHAP), we were able to interpret the trust predictions of XGBoost to further improve the explainability of the XGBoost model. Compared to traditional regression models and black-box machine learning models, our findings show that this approach simultaneously provided a high level of explainability and predictability of trust in AVs.
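The modeling recipe above, boosted trees on tabular survey features plus a post-hoc attribution step, can be sketched as follows. This is not the authors' pipeline: scikit-learn's GradientBoostingClassifier stands in for XGBoost, permutation importance stands in for SHAP, and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Synthetic stand-in data: 23 survey-derived features, binary high/low trust label
# driven mostly by the first two features.
X = rng.normal(size=(400, 23))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=400) > 0).astype(int)

# Boosted-tree classifier (stand-in for XGBoost).
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global attribution (stand-in for SHAP): how much each feature drives predictions.
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:3]
```

The attribution step is what turns the otherwise black-box model into something whose trust predictions can be explained feature by feature.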

    From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI

    Full text link
    This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups, namely, manual driving-related research and automated driving-related research. Within manual driving, we mainly focus on studies on user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology. Within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for research in this area.
https://deepblue.lib.umich.edu/bitstream/2027.42/153959/1/From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI.pdf

    Analyzing Customer Needs of Product Ecosystems Using Online Product Reviews

    Full text link
    It is necessary to analyze customer needs of a product ecosystem in order to increase customer satisfaction and user experience, which will, in turn, enhance its business strategy and profits. However, it is often time-consuming and challenging to identify and analyze customer needs of product ecosystems using traditional methods due to the numerous products and services involved, as well as their interdependence within the product ecosystem. In this paper, we analyzed customer needs of a product ecosystem by capitalizing on online product reviews of multiple products and services of the Amazon product ecosystem with machine learning techniques. First, we filtered the noise in the reviews using a fastText method to categorize the reviews as informative or uninformative regarding customer needs. Second, we extracted various customer-need-related topics using a latent Dirichlet allocation technique. Third, we conducted sentiment analysis using a valence aware dictionary and sentiment reasoner method, which not only predicted the sentiment of the reviews but also its intensity. Based on the first three steps, we dynamically classified customer needs using an analytical Kano model. The case study of the Amazon product ecosystem showed the potential of the proposed method.
https://deepblue.lib.umich.edu/bitstream/2027.42/153962/1/ANALYZING CUSTOMER NEEDS OF PRODUCT ECOSYSTEMS USING ONLINE PRODUCT REVIEWS.pdf
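The third step, valence-aware sentiment with intensity, can be illustrated with a toy lexicon scorer in the spirit of VADER. This is a didactic sketch only: the lexicon, booster weights, and damping constant below are invented stand-ins for VADER's empirically derived values.

```python
# Toy lexicon and rules; the real VADER lexicon is far larger and empirically weighted.
LEXICON = {"great": 3.0, "good": 2.0, "love": 3.0,
           "bad": -2.0, "terrible": -3.0, "broken": -2.5}
BOOSTERS = {"very": 0.5, "extremely": 0.8}
NEGATIONS = {"not", "never", "no"}

def sentiment(text):
    """Return a signed intensity score: lexicon hits adjusted by boosters and negation."""
    words = text.lower().split()
    score = 0.0
    for i, w in enumerate(words):
        if w not in LEXICON:
            continue
        s = LEXICON[w]
        # Intensifiers push the score further from zero ("very good" > "good").
        if i > 0 and words[i - 1] in BOOSTERS:
            s += BOOSTERS[words[i - 1]] * (1 if s > 0 else -1)
        # A nearby negation flips and damps the score ("not good" is mildly negative).
        if any(words[j] in NEGATIONS for j in range(max(0, i - 3), i)):
            s = -0.74 * s
        score += s
    return score
```

The key point for the Kano-model step is that the output is graded, not just positive/negative, so "extremely broken" can be weighted more heavily than "bad".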

    Building Trust Profiles in Conditionally Automated Driving

    Full text link
    Trust is crucial for ensuring the safety, security, and widespread adoption of automated vehicles (AVs); if trust is lacking, drivers and the public may not be willing to use them. This research investigates trust profiles in order to create personalized experiences for drivers in AVs, helping us better understand drivers' dynamic trust from a persona's perspective. The study was conducted in a driving simulator where participants were requested to take over control from automated driving in three conditions, a control condition, a false alarm condition, and a miss condition, with eight takeover requests (TORs) in different scenarios. Drivers' dispositional trust, initial learned trust, dynamic trust, personality, and emotions were measured. We identified three trust profiles (i.e., believers, oscillators, and disbelievers) using a K-means clustering model. To validate this model, we built a multinomial logistic regression model based on a SHAP explainer that selected the most important features to predict the trust profiles, achieving an F1-score of 0.90 and an accuracy of 0.89. We also discussed how different individual factors influenced trust profiles, which helped us understand trust dynamics better from a persona's perspective. Our findings have important implications for designing a personalized in-vehicle trust monitoring and calibrating system to adjust drivers' trust levels in order to improve safety and experience in automated driving.
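The profiling step, clustering trust trajectories across repeated takeover requests into believer/oscillator/disbeliever groups, can be sketched with scikit-learn's KMeans on synthetic data. The trajectory shapes and counts below are invented for illustration, not the study's measurements.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic trust ratings (e.g., 1-7 scale) over 8 takeover requests
# for three hypothetical archetypes, 30 simulated drivers each.
believers = rng.normal(6.0, 0.3, size=(30, 8))      # consistently high trust
oscillators = np.tile([5.0, 2.5], 4) + rng.normal(0, 0.3, size=(30, 8))  # swinging
disbelievers = rng.normal(2.0, 0.3, size=(30, 8))   # consistently low trust

X = np.vstack([believers, oscillators, disbelievers])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```

Each driver's whole trajectory is one feature vector, so the clustering separates profiles by shape over time, not just by average trust level.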

    Otto: An Autonomous School Bus System for Parents and Children

    Full text link
    Technological advances in autonomous transportation systems have brought them closer to road use. However, little research is reported on children’s behavior in autonomous buses (ABs) under real road conditions and on improving parents’ trust in leaving their children alone in ABs. Thus, we aim to answer the research question: “How can we design ABs suitable for unaccompanied children so that the parents can trust them?” We conducted a study using a Wizard-of-Oz method to observe children’s behavior and interviewed both parents and children to examine their needs in ABs. Using an affinity diagram, we grouped children’s and parents’ needs under the following categories: entertainment, communication, personal behavior, trust and desires. Using an iterative human-centered design process, we created the Otto system: a smartphone app for parents to communicate with their children and a tablet app for children to entertain themselves during the ride.
Peer Reviewed
https://deepblue.lib.umich.edu/bitstream/2027.42/153797/1/chi20e-sub1381-cam-i15.pd

    Combat COVID-19 Infodemic Using Explainable Natural Language Processing Models

    Full text link
    Misinformation about COVID-19 is prevalent on social media as the pandemic unfolds, and the associated risks are extremely high. Thus, it is critical to detect and combat such misinformation. Recently, deep learning models using natural language processing techniques, such as BERT (Bidirectional Encoder Representations from Transformers), have achieved great successes in detecting misinformation. In this paper, we proposed an explainable natural language processing model based on DistilBERT and SHAP (Shapley Additive exPlanations) to combat misinformation about COVID-19, chosen for their efficiency and effectiveness. First, we collected a dataset of 984 fact-checked claims about COVID-19. By augmenting the data using back-translation, we doubled the sample size of the dataset, and the DistilBERT model was able to obtain good performance (accuracy: 0.972; area under the curve: 0.993) in detecting misinformation about COVID-19. Our model was also tested on a larger dataset for the AAAI2021 COVID-19 Fake News Detection Shared Task and obtained good performance (accuracy: 0.938; area under the curve: 0.985). The performance on both datasets was better than traditional machine learning models. Second, in order to boost public trust in model prediction, we employed SHAP to improve model explainability, which was further evaluated using a between-subjects experiment with three conditions, i.e., text (T), text+SHAP explanation (TSE), and text+SHAP explanation+source and evidence (TSESE). The participants were significantly more likely to trust and share information related to COVID-19 in the TSE and TSESE conditions than in the T condition. Our results provided good implications for detecting misinformation about COVID-19 and improving public trust.
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/166319/1/Covid_Information_Processing_and_Management.pdf
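The core task, supervised classification of short claims as misinformation or reliable, can be sketched with a lightweight baseline. This is not the paper's model: TF-IDF plus logistic regression stands in for the fine-tuned DistilBERT, and the six claims below are invented toy examples (the real training set was 984 fact-checked claims).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented claims for illustration only.
claims = [
    "drinking water cures the virus",
    "garlic prevents infection overnight",
    "5g towers spread the disease",
    "vaccines were tested in clinical trials",
    "masks reduce droplet transmission",
    "washing hands lowers infection risk",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = misinformation, 0 = reliable

# Bag-of-words stand-in for a transformer text classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(claims, labels)
```

A transformer such as DistilBERT replaces the TF-IDF features with contextual embeddings, but the overall train/predict/explain workflow is the same, which is what makes SHAP applicable to both.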
